Neural Radiance Fields (NeRF) methods have proved effective as compact, high-quality and versatile representations for 3D scenes, and enable downstream tasks such as editing, retrieval and navigation. Various neural architectures are vying for the core structure of NeRF, including the plain Multi-Layer Perceptron (MLP), sparse tensors, low-rank tensors, hashtables and their compositions. Each of these representations has its particular set of trade-offs. For example, hashtable-based representations admit faster training and rendering, but their lack of clear geometric meaning hampers downstream tasks like spatial-relation-aware editing. In this paper, we propose Progressive Volume Distillation (PVD), a systematic distillation method that allows any-to-any conversion between different architectures, including MLPs, sparse or low-rank tensors, hashtables and their compositions. PVD consequently empowers downstream applications to optimally adapt the neural representation to the task at hand in a post hoc fashion. The conversions are fast, as distillation is performed progressively on different levels of the volume representation, from shallower to deeper. We also apply special treatment to density to address its numerical instability. Empirical evidence validates our method on the NeRF-Synthetic, LLFF and TanksAndTemples datasets. For example, with PVD, an MLP-based NeRF model can be distilled from a hashtable-based Instant-NGP model 10X~20X faster than training the original NeRF from scratch, while achieving a superior level of synthesis quality. Code is available at https://github.com/megvii-research/AAAI2023-PVD.
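As a rough sketch of what such an any-to-any conversion boils down to, assuming both teacher and student expose a point-wise density/color query (the function signatures, sampling scheme and log-space density treatment below are illustrative, not the paper's exact formulation):

```python
import torch

def distill_step(teacher, student, optimizer, batch_size=4096):
    """One hypothetical distillation step: match the student's density and
    color to the frozen teacher's at randomly sampled points and directions."""
    # Sample points in an assumed [-1, 1]^3 scene box and random unit view directions.
    pts = torch.rand(batch_size, 3) * 2.0 - 1.0
    dirs = torch.nn.functional.normalize(torch.randn(batch_size, 3), dim=-1)

    with torch.no_grad():                      # teacher is frozen
        sigma_t, rgb_t = teacher(pts, dirs)
    sigma_s, rgb_s = student(pts, dirs)

    # PVD treats density specially; comparing densities in log space is one
    # simple stabilizer (an assumption, not the paper's exact form).
    loss = torch.nn.functional.mse_loss(torch.log1p(sigma_s.clamp(min=0)),
                                        torch.log1p(sigma_t.clamp(min=0))) \
         + torch.nn.functional.mse_loss(rgb_s, rgb_t)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```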
With the success of self-supervised learning (SSL), it has become a mainstream paradigm to fine-tune from self-supervised pretrained models to boost performance on downstream tasks. However, we find that current SSL models suffer severe accuracy drops when performing low-bit quantization, prohibiting their deployment in resource-constrained applications. In this paper, we propose a method called Synergistic Self-supervised and Quantization Learning (SSQL) to pretrain quantization-friendly self-supervised models, thereby facilitating downstream deployment. SSQL contrasts the features of quantized and full-precision models in a self-supervised fashion, where the bit-width of the quantized model is randomly selected at each step. SSQL not only significantly improves accuracy when quantized to lower bit-widths, but also boosts the accuracy of the full-precision model in most cases. By training only once, SSQL can simultaneously benefit various downstream tasks at different bit-widths. Moreover, the bit-width flexibility is achieved without extra storage overhead, requiring only one copy of weights during training and inference. We theoretically analyze the optimization process of SSQL and conduct exhaustive experiments on various benchmarks to further demonstrate the effectiveness of our method. Our code is available at https://github.com/megvii-research/ssql-eccv2022.
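A rough, assumption-laden sketch of the training signal described above: run the same encoder once at full precision and once with fake-quantized weights at a randomly chosen bit-width, then pull the two feature sets together. The weight-only quantization, straight-through estimator and SimSiam-style stop-gradient are my simplifications, not necessarily the paper's exact formulation.

```python
import random
import torch
import torch.nn.functional as F
from torch.func import functional_call

def fake_quantize(w, bits):
    """Symmetric uniform fake quantization with a straight-through estimator,
    so gradients still flow back to the full-precision weights."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.round(w / scale).clamp(-qmax, qmax) * scale
    return w + (q - w).detach()

def ssql_style_loss(encoder, view1, view2):
    bits = random.choice([2, 3, 4, 5, 6, 8])       # random bit-width per step

    # Quantized branch: same encoder, fake-quantized weights.
    q_params = {n: fake_quantize(p, bits) for n, p in encoder.named_parameters()}
    feat_q = functional_call(encoder, q_params, (view1,))

    # Full-precision branch acts as a stop-gradient target.
    with torch.no_grad():
        feat_fp = encoder(view2)

    return 1.0 - F.cosine_similarity(feat_q, feat_fp, dim=-1).mean()
```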
Drawing images of characters at desired poses is an essential but laborious task in anime production. In this paper, we present Collaborative Neural Rendering (CoNR), a method that creates new images from a few arbitrarily posed reference images available in character sheets. In general, the high diversity of body shapes of anime characters defies the employment of universal body models for real-world humans, such as SMPL. To overcome this difficulty, CoNR uses a compact and easy-to-obtain landmark encoding to avoid creating a unified UV mapping in the pipeline. In addition, CoNR's performance can be significantly improved when multiple reference images are used, by employing feature-space cross-view dense correspondence and warping in a specially designed neural network construct. Moreover, we have collected a character sheet dataset containing more than 700,000 hand-drawn and synthesized images of diverse poses to facilitate research in this area.
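CoNR's exact correspondence module is not spelled out in this summary; the snippet below is only a generic sketch of the kind of operation the abstract refers to, warping a reference-view feature map to the target view given a dense 2D correspondence field (all shapes and conventions are assumptions).

```python
import torch
import torch.nn.functional as F

def warp_reference_features(ref_feat, flow):
    """Warp reference-view features to the target view using a dense pixel-offset
    field. Shapes: ref_feat (N, C, H, W), flow (N, 2, H, W) in (dx, dy) order."""
    n, _, h, w = ref_feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(ref_feat.device)   # (2, H, W)
    coords = base.unsqueeze(0) + flow                                  # (N, 2, H, W)
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)                   # (N, H, W, 2)
    return F.grid_sample(ref_feat, grid, align_corners=True)
```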
This paper reports our solution for the MultiMedia ViCo 2022 Conversational Head Generation Challenge, which aims to generate vivid face-to-face conversation videos based on audio and reference images. Our solution focuses on training a generalized audio-to-head driver with regularization and assembling a high-visual-quality renderer. We carefully tweak the audio-to-behavior model and post-process the generated videos with our foreground-background fusion module. In the official ranking of the challenge's two tracks (talking head generation and listening head generation), our solution won first place in the listening head generation track. Our code will be released.
Network quantization significantly reduces model inference complexity and has been widely used in real-world deployment. However, most existing quantization methods have been developed and tested mainly on convolutional neural networks (CNNs), and suffer severe degradation when applied to Transformer-based architectures. In this work, we present a systematic method to reduce the performance degradation and inference complexity of quantized Transformers. In particular, we propose Power-of-Two Scale (PTS) to deal with the serious inter-channel variation of LayerNorm inputs in a hardware-friendly way. In addition, we propose Log-Int-Softmax (LIS), which can sustain the extremely non-uniform distribution of attention maps while simplifying inference with 4-bit quantization and bit-shift operations. Comprehensive experiments on various Transformer-based architectures and benchmarks show that our method consistently outperforms previous work, even when using lower bit-widths for attention maps. For example, we reach 85.17% accuracy on ImageNet and 51.4 mAP with Cascade Mask R-CNN (Swin-S) on COCO. To our knowledge, we are the first to achieve comparable accuracy degradation (~1%) on fully quantized vision Transformers. Code is available at https://github.com/linyang-zhh/fq-vit.
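As a toy illustration of the log-domain idea behind LIS (a simplified reading, not the paper's exact kernel): attention probabilities can be stored as small integer exponents, so dequantization reduces to powers of two that hardware can realize with bit-shifts.

```python
import torch

def log2_quantize_attention(attn_probs, bits=4):
    """Store q = round(-log2(p)) in `bits` bits; dequantize as p_hat = 2**(-q).
    Illustrative only -- the actual LIS operator fuses this with integer softmax."""
    qmax = 2 ** bits - 1
    q = torch.round(-torch.log2(attn_probs.clamp(min=2.0 ** (-qmax))))
    q = q.clamp(0, qmax)
    return 2.0 ** (-q), q            # dequantized probabilities, integer codes

# Example: quantize the output of a standard softmax over attention logits.
logits = torch.randn(1, 8, 16, 16)   # (batch, heads, queries, keys)
p = torch.softmax(logits, dim=-1)
p_hat, codes = log2_quantize_attention(p, bits=4)
```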
The enormous computational requirements of deep neural networks are a major obstacle to their real-world deployment. Many recent application-specific integrated circuit (ASIC) chips feature dedicated hardware support for neural network acceleration. However, because ASICs take years to develop, they inevitably lag behind the latest advances in neural architecture research. For example, Transformer networks have no native support on many popular chips and are therefore difficult to deploy. In this paper, we propose Arch-Net, a family of neural networks built exclusively from operators that are efficiently supported by most ASIC architectures. When producing an Arch-Net, less common network constructs such as Layer Normalization and Embedding layers are eliminated in a progressive manner through label-free blockwise model distillation, while sub-eight-bit quantization is performed simultaneously to maximize performance. Empirical results on machine translation and image classification tasks confirm that we can transform the latest developed neural architectures into fast-running and accurate Arch-Net models, ready for deployment on multiple mass-produced ASIC chips. Code will be available at https://github.com/megvii-research/Arch-Net.
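A minimal sketch of what label-free blockwise distillation could look like, assuming a one-to-one pairing between teacher and student blocks (this pairing and the plain MSE objective are assumptions, not Arch-Net's exact recipe):

```python
import torch
import torch.nn.functional as F

def blockwise_distill(teacher_blocks, student_blocks, x, optimizer):
    """Feed the same (unlabeled) input through paired teacher/student blocks and
    match intermediate activations, so unsupported operators can be swapped out
    block by block without ground-truth labels."""
    loss = 0.0
    t_in = x
    for t_block, s_block in zip(teacher_blocks, student_blocks):
        with torch.no_grad():
            t_out = t_block(t_in)
        s_out = s_block(t_in)              # student sees the teacher's input
        loss = loss + F.mse_loss(s_out, t_out)
        t_in = t_out                       # move on to the next block
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```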
We propose a real-time intermediate flow estimation algorithm (RIFE) for video frame interpolation (VFI). Many recent flow-based VFI methods first estimate bidirectional optical flows, then scale and reverse them to approximate the intermediate flows, which leads to artifacts on motion boundaries. RIFE uses a neural network named IFNet that can directly estimate the intermediate flows from coarse to fine, with much better speed. We design a privileged distillation scheme for training the intermediate flow model, which leads to a large performance improvement. RIFE does not rely on pre-trained optical flow models and can support arbitrary-timestep frame interpolation. Experiments show that RIFE achieves state-of-the-art performance on several public benchmarks. Code: https://github.com/hzwer/arxiv2020-rife.
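A hedged sketch of the privileged distillation idea: a teacher branch that also sees the ground-truth intermediate frame predicts a flow, and the student flow (estimated without that privileged input) is pulled toward it. The weighting, loss form and branch names here are assumptions, not RIFE's published values.

```python
import torch
import torch.nn.functional as F

def privileged_distillation_loss(student_flow, teacher_flow, weight=0.01):
    """Pull the student's intermediate flow toward the privileged teacher's flow."""
    return weight * F.l1_loss(student_flow, teacher_flow.detach())

# Hypothetical usage inside a training step:
# flow_student = ifnet(frame0, frame1, t)                     # no access to the middle frame
# flow_teacher = teacher_branch(frame0, frame1, frame_gt, t)  # privileged: sees ground truth
# loss = reconstruction_loss + privileged_distillation_loss(flow_student, flow_teacher)
```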
Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.
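Since the pipeline's simplicity shifts the design effort onto the loss functions, here is a hedged sketch of the kind of per-pixel objectives such a single-network quadrilateral detector typically combines; the exact class balancing and weighting used in the paper are not reproduced, and the helper names are illustrative.

```python
import torch

def dice_loss(pred_score, gt_score, eps=1e-6):
    """Dice loss on the per-pixel text/non-text score map (an illustrative
    stand-in for the paper's balanced classification objective)."""
    inter = (pred_score * gt_score).sum()
    union = pred_score.sum() + gt_score.sum() + eps
    return 1.0 - 2.0 * inter / union

def rbox_geometry_loss(pred_geo, gt_geo, pred_angle, gt_angle, eps=1e-6):
    """Per-pixel IoU-style loss on the four edge distances plus a cosine angle
    term; in practice it is averaged over positive (text) pixels only.
    pred_geo, gt_geo: (N, 4, H, W); angles: (N, H, W)."""
    d1, d2, d3, d4 = pred_geo.unbind(dim=1)   # distances to top, bottom, left, right
    g1, g2, g3, g4 = gt_geo.unbind(dim=1)
    area_pred = (d1 + d2) * (d3 + d4)
    area_gt = (g1 + g2) * (g3 + g4)
    w_inter = torch.min(d3, g3) + torch.min(d4, g4)
    h_inter = torch.min(d1, g1) + torch.min(d2, g2)
    inter = w_inter * h_inter
    iou = inter / (area_pred + area_gt - inter + eps)
    angle = 1.0 - torch.cos(pred_angle - gt_angle)
    return -torch.log(iou + eps) + 10.0 * angle   # weighting is illustrative
```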
Current advances in recommender systems have been remarkably successful in optimizing immediate engagement. However, long-term user engagement, a more desirable performance metric, remains difficult to improve. Meanwhile, recent reinforcement learning (RL) algorithms have shown their effectiveness in a variety of long-term goal optimization tasks. For this reason, RL is widely considered as a promising framework for optimizing long-term user engagement in recommendation. Despite being a promising approach, the application of RL heavily relies on well-designed rewards, but designing rewards related to long-term user engagement is quite difficult. To mitigate the problem, we propose a novel paradigm, Preference-based Recommender systems (PrefRec), which allows RL recommender systems to learn from preferences about users' historical behaviors rather than explicitly defined rewards. Such preferences are easily accessible through techniques such as crowdsourcing, as they do not require any expert knowledge. With PrefRec, we can fully exploit the advantages of RL in optimizing long-term goals, while avoiding complex reward engineering. PrefRec uses the preferences to automatically train a reward function in an end-to-end manner. The reward function is then used to generate learning signals to train the recommendation policy. Furthermore, we design an effective optimization method for PrefRec, which uses an additional value function, expectile regression and reward model pre-training to improve the performance. Extensive experiments are conducted on a variety of long-term user engagement optimization tasks. The results show that PrefRec significantly outperforms previous state-of-the-art methods in all the tasks.
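As a rough illustration of the preference-to-reward step described above (not PrefRec's exact architecture, value function or pre-training), a Bradley-Terry-style objective over pairs of user-behavior segments could look like this:

```python
import torch
import torch.nn.functional as F

def preference_reward_loss(reward_model, segment_a, segment_b, prefer_a):
    """Learn a reward function from pairwise preferences: the probability that
    segment A is preferred is modeled from the difference of the two segments'
    summed predicted rewards.
    segment_*: (batch, steps, feature_dim); prefer_a: (batch,) in {0, 1}."""
    ret_a = reward_model(segment_a).sum(dim=1).squeeze(-1)   # predicted return of A
    ret_b = reward_model(segment_b).sum(dim=1).squeeze(-1)   # predicted return of B
    logits = ret_a - ret_b                                   # log-odds that A is preferred
    return F.binary_cross_entropy_with_logits(logits, prefer_a.float())
```

The learned reward model then supplies the learning signal for the recommendation policy, replacing hand-designed long-term engagement rewards.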
Cell instance segmentation is a new task aimed at the joint detection and segmentation of each cell in an image. Recently, many instance segmentation methods have been applied to this task. Despite their great success, two main weaknesses remain, both caused by the uncertainty in localizing cell center points. First, densely packed cells can easily be merged and recognized as a single cell. Second, an elongated cell can easily be split and recognized as two cells. To overcome these two weaknesses, we propose a new cell instance segmentation network based on multi-scheme regression guidance. With multi-scheme regression guidance, the network is able to look at each cell from different views. Specifically, we first propose a Gaussian guidance attention mechanism that uses Gaussian labels to guide the network's attention. We then propose a point-regression module to assist the regression of cell centers. Finally, we use the outputs of these two modules to further guide the instance segmentation. With multi-scheme regression guidance, we can take full advantage of the features of different regions, especially the central regions of cells. We conduct extensive experiments on the benchmark datasets DSB2018, CA2.5 and SCIS. The encouraging results show that our network achieves state-of-the-art (SOTA) performance. On DSB2018 and CA2.5, our network surpasses previous methods by more than 1.2% (AP50). On the SCIS dataset in particular, our network outperforms by a larger margin (3.0% higher AP50). Visualization and analysis further demonstrate that our proposed method is interpretable.
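A small sketch of the kind of Gaussian label map the Gaussian guidance attention described above could be trained against, built from annotated cell centers; the sigma value and the max-composition rule are assumptions, not the paper's published settings.

```python
import numpy as np

def gaussian_center_heatmap(shape, centers, sigma=3.0):
    """Build a Gaussian label map from cell center annotations.
    shape: (H, W); centers: iterable of (row, col) coordinates."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    heatmap = np.zeros((h, w), dtype=np.float32)
    for cy, cx in centers:
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)   # keep the strongest response per pixel
    return heatmap

# Example: two cells in a 64x64 crop.
labels = gaussian_center_heatmap((64, 64), [(20, 20), (40, 45)], sigma=3.0)
```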